
    Camera Calibration from Dynamic Silhouettes Using Motion Barcodes

    Computing the epipolar geometry between cameras with very different viewpoints is often problematic, as matching points are hard to find. In such cases, it has been proposed to use information from dynamic objects in the scene to suggest point and line correspondences. We propose a speedup of about two orders of magnitude, as well as an increase in robustness and accuracy, for methods that compute epipolar geometry from dynamic silhouettes. This improvement is based on a new temporal signature, the motion barcode for lines: a binary temporal sequence indicating, for each frame, whether at least one foreground pixel lies on the line. The motion barcodes of two corresponding epipolar lines are very similar, so the search for corresponding epipolar lines can be restricted to lines with similar barcodes. The use of motion barcodes leads to increased speed, accuracy, and robustness in computing the epipolar geometry.
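    A minimal sketch of the line motion barcode idea described above, not the authors' code: given per-frame binary foreground masks and pixel coordinates sampled along a candidate epipolar line, the barcode marks, per frame, whether any foreground pixel falls on that line. The similarity measure shown (normalized correlation of the two binary sequences) and the 0.9 threshold are illustrative assumptions.

    import numpy as np

    def line_motion_barcode(foreground_masks, line_pixels):
        """foreground_masks: (T, H, W) boolean array; line_pixels: (N, 2) array of (row, col)."""
        rows, cols = line_pixels[:, 0], line_pixels[:, 1]
        # Entry t is 1 iff at least one sampled pixel on the line is foreground in frame t.
        return foreground_masks[:, rows, cols].any(axis=1).astype(np.float64)

    def barcode_similarity(b1, b2):
        """Normalized correlation of two barcodes (illustrative choice of similarity)."""
        b1, b2 = b1 - b1.mean(), b2 - b2.mean()
        denom = np.linalg.norm(b1) * np.linalg.norm(b2)
        return float(b1 @ b2 / denom) if denom > 0 else 0.0

    # Candidate epipolar line pairs would be kept for geometric verification only if
    # their barcodes agree, e.g. barcode_similarity(bc_a, bc_b) > 0.9 (hypothetical threshold).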

    HyperRes: Efficient Hypernetwork-Based Continuous Image Restoration

    Continuous image restoration attempts to provide a model that, at inference time, can restore images with degradation levels unseen during training. Existing methods are limited in the accuracy of the restoration, the range of degradation levels they can support, or the size of the model they require. We introduce a novel approach that achieves the accuracy of multiple dedicated models across a wide range of degradation levels with the same number of parameters as a single base model. We present a hypernetwork that can efficiently generate an image restoration network best adapted to the required level of degradation. Experiments on popular datasets show that our approach outperforms the state of the art on a variety of image restoration tasks, including denoising, DeJPEG, and super-resolution.
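    An illustrative sketch of the hypernetwork idea from the abstract, not the HyperRes architecture itself: a small MLP maps the scalar degradation level to the weights of a convolutional layer, so one set of hypernetwork parameters can produce a restoration layer adapted to any level requested at inference time. The layer sizes and the single generated convolution are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HyperConv(nn.Module):
        def __init__(self, in_ch=3, out_ch=3, k=3, hidden=64):
            super().__init__()
            self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
            n_params = out_ch * in_ch * k * k + out_ch          # conv weight + bias
            self.hyper = nn.Sequential(
                nn.Linear(1, hidden), nn.ReLU(),
                nn.Linear(hidden, n_params),
            )

        def forward(self, x, level):
            # level: 1-element tensor holding the degradation level (e.g. noise sigma).
            params = self.hyper(level.view(1, 1)).squeeze(0)
            w_end = self.out_ch * self.in_ch * self.k * self.k
            weight = params[:w_end].view(self.out_ch, self.in_ch, self.k, self.k)
            bias = params[w_end:]
            return F.conv2d(x, weight, bias, padding=self.k // 2)

    # Usage (hypothetical): restored = HyperConv()(noisy_batch, torch.tensor([25.0]))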

    Event Retrieval Using Motion Barcodes

    We introduce a simple and effective method for retrieving videos that show a specific event, even when the videos of that event were captured from significantly different viewpoints. Appearance-based methods fail in such cases, as appearances change greatly with large changes of viewpoint. Our method is based on a pixel-based feature, the "motion barcode", which records the existence or non-existence of motion as a function of time. While appearance, motion magnitude, and motion direction can vary greatly between disparate viewpoints, the existence of motion is viewpoint invariant. Based on the motion barcode, a similarity measure is developed for videos of the same event taken from very different viewpoints. This measure is robust to the occlusions common under different viewpoints and can be computed efficiently. Event retrieval is demonstrated on challenging videos from stationary and hand-held cameras.
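    A minimal sketch of per-pixel motion barcodes, not the authors' implementation: each pixel's barcode records, per frame, whether motion was detected there. The frame-difference motion test, its threshold, and the overlap-based similarity between two pixel barcodes are assumptions chosen for illustration.

    import numpy as np

    def pixel_motion_barcodes(frames, thresh=15):
        """frames: (T, H, W) grayscale video; returns (T-1, H, W) binary barcodes."""
        diffs = np.abs(np.diff(frames.astype(np.int16), axis=0))
        return (diffs > thresh).astype(np.uint8)

    def barcode_overlap(b1, b2):
        """Illustrative similarity of two pixel barcodes: overlap of their 'motion' frames."""
        inter = np.logical_and(b1, b2).sum()
        union = np.logical_or(b1, b2).sum()
        return inter / union if union else 0.0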

    CEL-Net: Continuous Exposure for Extreme Low-Light Imaging

    Deep learning methods for enhancing dark images learn a mapping from input images to output images at pre-determined, discrete exposure levels. Often, at inference time the input and optimal output exposure levels of a given image differ from those seen during training. As a result, the enhanced image may suffer from visual distortions, such as low contrast or dark areas. We address this issue by introducing a deep learning model that can continuously generalize at inference time to unseen exposure levels without retraining. To this end, we introduce a dataset of 1500 raw images captured in both outdoor and indoor scenes, with five different exposure levels and various camera parameters. Using the dataset, we develop a model for extreme low-light imaging that can continuously tune the input or output exposure level of the image to an unseen one. We investigate the properties of our model and validate its performance, showing promising results.
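    A minimal sketch of continuous exposure conditioning, not the CEL-Net model: a small MLP turns the (input, target) exposure pair into per-channel scale and shift factors that modulate the features of an enhancement network (FiLM-style conditioning). The layer sizes and the modulation scheme are assumptions for illustration.

    import torch
    import torch.nn as nn

    class ExposureModulation(nn.Module):
        def __init__(self, channels=32, hidden=32):
            super().__init__()
            self.channels = channels
            self.mlp = nn.Sequential(
                nn.Linear(2, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * channels),
            )

        def forward(self, feats, exp_in, exp_out):
            # feats: (N, C, H, W); exp_in / exp_out: floats that may be unseen during training.
            cond = torch.tensor([[exp_in, exp_out]], dtype=feats.dtype, device=feats.device)
            scale, shift = self.mlp(cond).view(1, 2 * self.channels, 1, 1).chunk(2, dim=1)
            return feats * (1 + scale) + shift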

    The Gray-code filter kernels

    In this paper we introduce a family of filter kernels, the Gray-Code Kernels (GCK), and demonstrate their use in image analysis. Filtering an image with a sequence of Gray-Code Kernels is highly efficient, requiring only two operations per pixel for each filter kernel, independent of the size or dimension of the kernel. We show that the family of kernels is large and includes the Walsh-Hadamard kernels among others. The GCK can be used to approximate any desired kernel and as such forms a complete representation. The efficiency of computing a sequence of GCK filters can be exploited in various real-time applications, such as pattern detection, feature extraction, texture analysis, texture synthesis, and more.
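    A minimal numerical sketch (not the paper's implementation) of why successive Gray-Code Kernel projections are cheap: two alpha-related kernels v_plus = [v_p, +v_p] and v_minus = [v_p, -v_p] share a prefix v_p of length delta, and writing b(x) = sum_t v(t) * f(x + t) for the correlation of a signal f with kernel v, the two projections satisfy b_plus(x) = b_plus(x - delta) - b_minus(x) - b_minus(x - delta). Once one projection is known, the next costs about two additions per pixel regardless of kernel length; the sign convention below is assumed for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    delta = 4
    v_p = rng.choice([-1.0, 1.0], size=delta)        # shared prefix with +/-1 entries
    v_plus = np.concatenate([v_p, v_p])
    v_minus = np.concatenate([v_p, -v_p])
    f = rng.standard_normal(64)                      # toy 1-D signal

    def correlate(f, v):
        """b(x) = sum_t v(t) * f(x + t) for every valid window position x."""
        return np.array([np.dot(v, f[x:x + len(v)]) for x in range(len(f) - len(v) + 1)])

    b_plus, b_minus = correlate(f, v_plus), correlate(f, v_minus)

    # Verify the two-operations-per-pixel recurrence wherever x - delta is valid.
    assert np.allclose(b_plus[delta:], b_plus[:-delta] - b_minus[delta:] - b_minus[:-delta])
    print("GCK recurrence holds at", len(b_plus) - delta, "positions")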

    Integrative genomic analysis by interoperation of bioinformatics tools in GenomeSpace

    Integrative analysis of multiple data types to address complex biomedical questions requires the use of multiple software tools in concert and remains an enormous challenge for most of the biomedical research community. Here we introduce GenomeSpace (http://www.genomespace.org), a cloud-based, cooperative community resource. Seeded as a collaboration of six of the most popular genomics analysis tools, GenomeSpace now supports the streamlined interaction of 20 bioinformatics tools and data resources. To help non-programming users leverage GenomeSpace in integrative analysis, it offers a growing set of ‘recipes’: short workflows involving a few tools and steps that guide investigators through high-utility analysis tasks.